Recognizing textual entailment (RTE) is a fundamental task in a variety of text-mining and natural language processing applications. This paper proposes a simple neural model for the RTE problem. It first matches each word in the hypothesis with its most similar word in the premise, producing an augmented representation of the hypothesis conditioned on the premise as a sequence of word pairs. An LSTM is then used to model this augmented sequence, and its final output is fed into a softmax layer to make the prediction. To enhance the performance of the base model, we also propose three techniques: the integration of multiple word-embedding libraries, bi-way integration, and an ensemble based on model averaging. Experimental results on the SNLI dataset show that the three techniques are effective in boosting predictive accuracy and that our method outperforms several state-of-the-art ones.
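The word-matching step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `match_hypothesis_to_premise` and the use of cosine similarity over pre-computed word vectors are assumptions for the sake of the example.

```python
import numpy as np

def match_hypothesis_to_premise(premise_vecs, hypothesis_vecs):
    """For each hypothesis word vector, find the most cosine-similar
    premise word vector, yielding the (hypothesis, matched-premise)
    word-pair sequence that the LSTM would then consume."""
    # Normalize rows so that dot products equal cosine similarities.
    p = premise_vecs / np.linalg.norm(premise_vecs, axis=1, keepdims=True)
    h = hypothesis_vecs / np.linalg.norm(hypothesis_vecs, axis=1, keepdims=True)
    sims = h @ p.T                    # (len_h, len_p) similarity matrix
    best = sims.argmax(axis=1)        # index of most-similar premise word
    # Augmented representation: each hypothesis vector concatenated
    # with its best-matching premise vector.
    pairs = np.concatenate([hypothesis_vecs, premise_vecs[best]], axis=1)
    return best, pairs

# Toy 2-D embeddings: the first hypothesis word aligns with the first
# premise word, the second with the second.
premise = np.array([[1.0, 0.0], [0.0, 1.0]])
hypothesis = np.array([[0.9, 0.1], [0.1, 0.8]])
best, pairs = match_hypothesis_to_premise(premise, hypothesis)
```

Feeding the concatenated pair vectors to the sequence model, rather than the hypothesis vectors alone, is what conditions the hypothesis representation on the premise.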